119 research outputs found

    Ridgelets and the representation of mutilated Sobolev functions

    We show that ridgelets, a system introduced in [E. J. Candès, Appl. Comput. Harmon. Anal., 6 (1999), pp. 197–218], are optimal to represent smooth multivariate functions that may exhibit linear singularities. For instance, let {u · x − b > 0} be an arbitrary hyperplane and consider the singular function f(x) = 1_{u·x−b>0} g(x), where g is compactly supported with finite Sobolev norm ||g||_{H^s}, s > 0. The ridgelet coefficient sequence of such an object is as sparse as if f were without singularity, allowing optimal partial reconstructions. For instance, the n-term approximation obtained by keeping the terms corresponding to the n largest coefficients in the ridgelet series achieves a rate of approximation of order n^{−s/d}; the presence of the singularity does not spoil the quality of the ridgelet approximation. This is unlike all systems currently in use, especially Fourier or wavelet representations.
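
    The link between coefficient sparsity and the n-term rate can be illustrated numerically. The sketch below is a generic model, not a ridgelet computation: it assumes sorted coefficient magnitudes decaying like k^{-(s/d + 1/2)} (the sparsity the abstract describes) and checks that the best n-term L2 error then decays like n^{-s/d}; the values of s, d, and the sequence length are arbitrary choices.

```python
import numpy as np

# Model: sorted coefficient magnitudes |c|_(k) ~ k^{-(s/d + 1/2)},
# so the n-term L2 error should decay like n^{-s/d}.
s, d = 1.0, 2.0                            # assumed smoothness and dimension
k = np.arange(1, 100001)
c = k ** (-(s / d + 0.5))                  # model coefficient magnitudes
tail_sq = np.cumsum((c ** 2)[::-1])[::-1]  # tail_sq[i] = sum_{j >= i} c_j^2

def nterm_error(n):
    # L2 error after keeping the n largest coefficients
    return np.sqrt(tail_sq[n])             # tail starts at k = n + 1

# Empirical log-log slope between n = 100 and n = 1000
rate = np.log(nterm_error(1000) / nterm_error(100)) / np.log(10)
print(round(rate, 2))                      # close to -s/d = -0.5
```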

    What is...a Curvelet?

    Energized by the success of wavelets, the last two decades saw the rapid development of a new field, computational harmonic analysis, which aims to develop new systems for effectively representing phenomena of scientific interest. The curvelet transform is a recent addition to the family of mathematical tools this community enthusiastically builds up. In short, this is a new multiscale transform with strong directional character in which elements are highly anisotropic at fine scales, with effective support shaped according to the parabolic scaling principle length^2 ≈ width.
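
    A small numerical check of the parabolic scaling principle, under the standard convention (an assumption, stated for concreteness) that at scale a = 2^{-j} a curvelet has length ~ a^{1/2} and width ~ a:

```python
# Parabolic scaling: length ~ a^{1/2}, width ~ a, hence length^2 = width.
for j in range(2, 10, 2):                  # even dyadic scales
    a = 2.0 ** (-j)
    length, width = a ** 0.5, a
    assert width == length ** 2            # exact for even j (dyadic powers)
    print(f"j={j}: length={length}, width={width}")
```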

    How well can we estimate a sparse vector?

    The estimation of a sparse vector in the linear model is a fundamental problem in signal processing, statistics, and compressive sensing. This paper establishes a lower bound on the mean-squared error, which holds regardless of the sensing/design matrix being used and regardless of the estimation procedure. This lower bound very nearly matches the known upper bound one gets by taking a random projection of the sparse vector followed by an ℓ1 estimation procedure such as the Dantzig selector. In this sense, compressive sensing techniques cannot essentially be improved.
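
    The "random projection followed by ℓ1 estimation" pipeline can be sketched in a few lines. This is not the Dantzig selector itself; as a stand-in it uses noiseless basis pursuit, min ||x||_1 subject to Ax = y, cast as a linear program via the split x = u − v with u, v ≥ 0. SciPy availability, the seed, and the problem sizes are all assumptions.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 60, 30, 3                        # ambient dim, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x

# LP for basis pursuit: minimize sum(u) + sum(v) = ||x||_1 s.t. A(u - v) = y
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]
print(np.allclose(x_hat, x, atol=1e-6))    # exact recovery w.h.p.
```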

    An Introduction To Compressive Sampling [A sensing/sampling paradigm that goes against the common knowledge in data acquisition]

    This article surveys the theory of compressive sampling, also known as compressed sensing or CS, a novel sensing/sampling paradigm that goes against the common wisdom in data acquisition. CS theory asserts that one can recover certain signals and images from far fewer samples or measurements than traditional methods use. To make this possible, CS relies on two principles: sparsity, which pertains to the signals of interest, and incoherence, which pertains to the sensing modality. Our intent in this article is to overview the basic CS theory that emerged in the works [1]–[3], present the key mathematical ideas underlying this theory, and survey a couple of important results in the field. Our goal is to explain CS as plainly as possible, and so our article is mainly of a tutorial nature. One of the charms of this theory is that it draws from various subdisciplines within the applied mathematical sciences, most notably probability theory. In this review, we have decided to highlight this aspect and especially the fact that randomness can — perhaps surprisingly — lead to very effective sensing mechanisms. We will also discuss significant implications, explain why CS is a concrete protocol for sensing and compressing data simultaneously (thus the name), and conclude our tour by reviewing important applications
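
    The incoherence principle has a standard quantitative form: the mutual coherence μ(Φ, Ψ) = √n · max |⟨φ_k, ψ_j⟩| between a sampling basis Φ and a representation basis Ψ, which lies in [1, √n]. A quick check (basis choice and size are illustrative) that the spike and Fourier bases achieve the ideal value μ = 1:

```python
import numpy as np

n = 64
Phi = np.eye(n)                            # spike (sampling) basis
F = np.fft.fft(np.eye(n)) / np.sqrt(n)     # orthonormal Fourier basis
mu = np.sqrt(n) * np.abs(Phi.conj().T @ F).max()
print(round(mu, 6))                        # maximal incoherence: 1.0
```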

    Highly Robust Error Correction by Convex Programming

    This paper discusses a stylized communications problem where one wishes to transmit a real-valued signal x ∈ ℝ^n (a block of n pieces of information) to a remote receiver. We ask whether it is possible to transmit this information reliably when a fraction of the transmitted codeword is corrupted by arbitrary gross errors, and when in addition, all the entries of the codeword are contaminated by smaller errors (e.g., quantization errors). We show that if one encodes the information as Ax where A ∈ ℝ^{m×n} (m ≥ n) is a suitable coding matrix, there are two decoding schemes that allow the recovery of the block of n pieces of information x with nearly the same accuracy as if no gross errors occurred upon transmission (or equivalently as if one had an oracle supplying perfect information about the sites and amplitudes of the gross errors). Moreover, both decoding strategies are very concrete and only involve solving simple convex optimization programs, either a linear program or a second-order cone program. We complement our study with numerical simulations showing that the encoder/decoder pair performs remarkably well.
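
    The linear-programming decoder can be sketched as ℓ1 regression: encode x as Ax, corrupt a fraction of entries, then decode by min_x ||y − Ax||_1, written as an LP in (x, t) with −t ≤ y − Ax ≤ t. For clarity this sketch includes only the gross errors (no small quantization noise, so recovery is exact); the sizes, seed, and use of SciPy's linprog are assumptions, not the paper's setup.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, m = 20, 100                             # message length, codeword length
A = rng.standard_normal((m, n))            # random coding matrix
x = rng.standard_normal(n)
y = A @ x
bad = rng.choice(m, 10, replace=False)
y[bad] += 50 * rng.standard_normal(10)     # gross errors on 10% of entries

# LP: minimize sum(t) subject to -t <= y - A x <= t
c = np.concatenate([np.zeros(n), np.ones(m)])
A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
b_ub = np.concatenate([y, -y])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(None, None))
x_hat = res.x[:n]
print(np.allclose(x_hat, x, atol=1e-6))    # gross errors rejected exactly
```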

    Recovering edges in ill-posed inverse problems: optimality of curvelet frames

    We consider a model problem of recovering a function f(x_1, x_2) from noisy Radon data. The function f to be recovered is assumed smooth apart from a discontinuity along a C^2 curve, that is, an edge. We use the continuum white-noise model, with noise level ε. Traditional linear methods for solving such inverse problems behave poorly in the presence of edges. Qualitatively, the reconstructions are blurred near the edges; quantitatively, they give in our model mean squared errors (MSEs) that tend to zero with noise level ε only as O(ε^{1/2}) as ε → 0. A recent innovation, nonlinear shrinkage in the wavelet domain, visually improves edge sharpness and improves MSE convergence to O(ε^{2/3}). However, as we show here, this rate is not optimal. In fact, essentially optimal performance is obtained by deploying the recently introduced tight frames of curvelets in this setting. Curvelets are smooth, highly anisotropic elements ideally suited for detecting and synthesizing curved edges. To deploy them in the Radon setting, we construct a curvelet-based biorthogonal decomposition of the Radon operator and build "curvelet shrinkage" estimators based on thresholding of the noisy curvelet coefficients. In effect, the estimator detects edges at certain locations and orientations in the Radon domain and automatically synthesizes edges at corresponding locations and directions in the original domain. We prove that the curvelet shrinkage can be tuned so that the estimator will attain, within logarithmic factors, the MSE O(ε^{4/5}) as noise level ε → 0. This rate of convergence holds uniformly over a class of functions which are C^2 except for discontinuities along C^2 curves, and (except for log terms) is the minimax rate for that class. Our approach is an instance of a general strategy which should apply in other inverse problems; we sketch a deconvolution example.
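
    The shrinkage step itself is simple to sketch. In place of actual curvelet coefficients (which require a curvelet transform), this generic illustration soft-thresholds a noisy coefficient sequence at the universal level λ = σ√(2 log N): a few large "edge-carrying" coefficients survive while the noise is killed. All numbers here are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
N, sigma = 4096, 0.1
theta = np.zeros(N)
theta[:40] = rng.standard_normal(40) * 5.0     # a few big "edge" coefficients
y = theta + sigma * rng.standard_normal(N)     # noisy observations

lam = sigma * np.sqrt(2 * np.log(N))           # universal threshold
theta_hat = np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)  # soft threshold

mse_shrink = np.mean((theta_hat - theta) ** 2)
mse_raw = np.mean((y - theta) ** 2)
print(mse_shrink < mse_raw)                    # shrinkage beats raw data
```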

    Continuous Curvelet Transform: I. Resolution of the Wavefront Set

    We discuss a Continuous Curvelet Transform (CCT), a transform f → Γf(a, b, θ) of functions f(x_1, x_2) on R^2, into a transform domain with continuous scale a > 0, location b ∈ R^2, and orientation θ ∈ [0, 2π). The transform is defined by Γf(a, b, θ) = ⟨f, γ_{abθ}⟩, where the inner products project f onto analyzing elements called curvelets γ_{abθ}, which are smooth and of rapid decay away from an a by √a rectangle with minor axis pointing in direction θ. We call them curvelets because this anisotropic behavior allows them to ‘track’ the behavior of singularities along curves. They are continuum scale/space/orientation analogs of the discrete frame of curvelets discussed in Candès and Donoho (2002). We use the CCT to analyze several objects having singularities at points, along lines, and along smooth curves. These examples show that for fixed (x_0, θ_0), Γf(a, x_0, θ_0) decays rapidly as a → 0 if f is smooth near x_0, or if the singularity of f at x_0 is oriented in a different direction than θ_0. Generalizing these examples, we state general theorems showing that decay properties of Γf(a, x_0, θ_0) for fixed (x_0, θ_0), as a → 0, can precisely identify the wavefront set and the H^m-wavefront set of a distribution. In effect, the wavefront set of a distribution is the closure of the set of (x_0, θ_0) near which Γf(a, x, θ) is not of rapid decay as a → 0; the H^m-wavefront set is the closure of those points (x_0, θ_0) where the ‘directional parabolic square function’ S^m(x, θ) = (∫ |Γf(a, x, θ)|^2 da / a^{3+2m})^{1/2} is not locally integrable. The CCT is closely related to a continuous transform used by Hart Smith in his study of Fourier Integral Operators. Smith’s transform is based on strict affine parabolic scaling of a single mother wavelet, while for the transform we discuss, the generating wavelet changes (slightly) scale by scale. The CCT can also be compared to the FBI (Fourier-Bros-Iagolnitzer) and Wave Packets (Cordoba-Fefferman) transforms. We describe their similarities and differences in resolving the wavefront set.
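
    The decay dichotomy behind these theorems can be seen already in a one-dimensional analog (an assumption for illustration, not the 2D CCT): wavelet coefficients of a step function decay slowly, like √a, at the jump, but rapidly in smooth regions. The wavelet choice (an odd derivative-of-Gaussian) and grid are arbitrary.

```python
import numpy as np

def psi(t):
    return -t * np.exp(-t ** 2 / 2)        # odd derivative-of-Gaussian wavelet

x = np.linspace(-4.0, 4.0, 200001)
dx = x[1] - x[0]
f = (x > 0).astype(float)                  # jump singularity at x = 0

def coef(a, b):
    # W(a, b) = a^{-1/2} * integral of f(x) psi((x - b)/a) dx
    return np.sum(f * psi((x - b) / a)) * dx / np.sqrt(a)

for a in (0.2, 0.1, 0.05):
    at_jump = abs(coef(a, 0.0))            # ~ sqrt(a) at the singularity
    away = abs(coef(a, 2.0))               # rapid decay in the smooth region
    print(f"a={a}: at jump {at_jump:.4f} (~{a ** 0.5:.4f}), away {away:.2e}")
```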

    Continuous Curvelet Transform: II. Discretization and Frames

    We develop a unifying perspective on several decompositions exhibiting directional parabolic scaling. In each decomposition, the individual atoms are highly anisotropic at fine scales, with effective support obeying the parabolic scaling principle width ≈ length^2. Our comparisons allow us to extend theorems known for one decomposition to others. We start from a Continuous Curvelet Transform f → Γ_f(a, b, θ) of functions f(x_1, x_2) on R^2, with parameter space indexed by scale a > 0, location b ∈ R^2, and orientation θ. The transform projects f onto a curvelet γ_{abθ}, yielding the coefficient Γ_f(a, b, θ) = ⟨f, γ_{abθ}⟩; the curvelet γ_{abθ} is defined by parabolic dilation in polar frequency-domain coordinates. We establish a reproducing formula and Parseval relation for the transform, showing that these curvelets provide a continuous tight frame. The CCT is closely related to a continuous transform introduced by Hart Smith in his study of Fourier Integral Operators. Smith’s transform is based on true affine parabolic scaling of a single mother wavelet, while the CCT can only be viewed as true affine parabolic scaling in Euclidean coordinates by taking a slightly different mother wavelet at each scale. Smith’s transform, unlike the CCT, does not provide a continuous tight frame. We show that, with the right underlying wavelet in Smith’s transform, the analyzing elements of the two transforms become increasingly similar at increasingly fine scales. We derive a discrete tight frame essentially by sampling the CCT at dyadic intervals in scale a_j = 2^{−j}, at equispaced intervals in direction θ_{jℓ} = 2π·2^{−j/2}·ℓ, and at equispaced sampling on a rotated anisotropic grid in space. This frame is a complexification of the ‘Curvelets 2002’ frame constructed by Emmanuel Candès et al. [1, 2, 3].
    We compare this discrete frame with a composite system which at coarse scales is the same as this frame but at fine scales is based on sampling Smith’s transform rather than the CCT. We are able to show a very close approximation of the two systems at fine scales, in a strong operator norm sense. Smith’s continuous transform was intended for use in forming molecular decompositions of Fourier Integral Operators (FIOs). Our results showing close approximation of the curvelet frame by a composite frame using true affine parabolic scaling at fine scales allow us to cross-apply Smith’s results, proving that the discrete curvelet transform gives sparse representations of FIOs of order zero. This yields an alternate proof of a recent result of Candès and Demanet about the sparsity of FIO representations in discrete curvelet frames.
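
    The sampling pattern can be sketched directly from the parameters above (restricted to even j so that j/2 is an integer, an assumption for simplicity): dyadic scales a_j = 2^{−j} with orientation spacing 2π·2^{−j/2}, so the number of distinct orientations doubles every other scale, in line with parabolic scaling.

```python
import math

for j in range(2, 9, 2):                   # even dyadic scales only
    a = 2.0 ** (-j)
    spacing = 2 * math.pi * 2.0 ** (-j / 2)
    n_theta = round(2 * math.pi / spacing) # orientations at scale j: 2^{j/2}
    print(f"j={j}: a={a}, orientations={n_theta}")
```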